
    The Design of Equal Complexity FIR Perfect Reconstruction Filter Banks Incorporating Symmetries

    In this report, we present a new approach to the design of perfect reconstruction filter banks (PRFBs) with equal-length FIR analysis and synthesis filters. To achieve perfect reconstruction, necessary and sufficient conditions are incorporated directly into a numerical design procedure as a set of quadratic equality constraints among the impulse response coefficients of the filters. Any symmetry inherent in a particular application, such as quadrature mirror symmetry, linear phase, or symmetry between analysis and synthesis filters, may be exploited to reduce the number of variables and constraints in the design problem. A novel feature of our new approach is that it allows the design of filter banks that perform functions other than flat-passband band-splitting.
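The perfect reconstruction conditions the abstract refers to can be stated for a two-channel bank as two polynomial identities on the analysis filters H0, H1 and synthesis filters G0, G1: the distortion term G0(z)H0(z) + G1(z)H1(z) must be a pure delay, and the alias term G0(z)H0(-z) + G1(z)H1(-z) must vanish. A minimal numerical check of these identities, using the 2-tap Haar pair as an illustrative filter choice (not one from the report), might look like:

```python
import numpy as np

# Haar (2-tap orthogonal) two-channel filter bank -- an illustrative
# choice, not a design from the report.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # analysis lowpass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # analysis highpass
g0 = h0.copy()                           # synthesis lowpass
g1 = -h1.copy()                          # synthesis highpass

def modulate(h):
    """Return the coefficients of H(-z): negate odd-indexed taps."""
    m = h.copy()
    m[1::2] *= -1
    return m

# Distortion term G0(z)H0(z) + G1(z)H1(z): must equal 2*z^{-d}.
distortion = np.convolve(g0, h0) + np.convolve(g1, h1)
# Alias term G0(z)H0(-z) + G1(z)H1(-z): must be identically zero.
alias = np.convolve(g0, modulate(h0)) + np.convolve(g1, modulate(h1))

print(distortion)  # [0. 2. 0.] -> 2*z^{-1}, a pure delay
print(alias)       # [0. 0. 0.]
```

In the design procedure described above, these product identities become quadratic equality constraints on the filter taps, since each coefficient of the convolutions is a quadratic form in the impulse response coefficients.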

    Seq-UPS: Sequential Uncertainty-aware Pseudo-label Selection for Semi-Supervised Text Recognition

    This paper looks at semi-supervised learning (SSL) for image-based text recognition. One of the most popular SSL approaches is pseudo-labeling (PL). PL approaches assign labels to unlabeled data before re-training the model with a combination of labeled and pseudo-labeled data. However, PL methods are severely degraded by noise and are prone to over-fitting to noisy labels, because poorly calibrated models generate erroneous high-confidence pseudo-labels, rendering threshold-based selection ineffective. Moreover, the combinatorial complexity of the hypothesis space and the error accumulation from multiple incorrect autoregressive steps make pseudo-labeling challenging for sequence models. To this end, we propose a pseudo-label generation and uncertainty-based data selection framework for semi-supervised text recognition. We first use beam-search inference to yield highly probable hypotheses and assign pseudo-labels to the unlabeled examples. Then we adopt an ensemble of models, sampled by applying dropout, to obtain a robust estimate of the uncertainty associated with each prediction, considering both the character-level and word-level predictive distributions to select good-quality pseudo-labels. Extensive experiments on several benchmark handwriting and scene-text datasets show that our method outperforms the baseline approaches and the previous state-of-the-art semi-supervised text-recognition methods. Comment: Accepted at WACV 202
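The dropout-ensemble selection step the abstract describes can be sketched as follows: run K stochastic forward passes, average the per-character predictive distributions, and accept a pseudo-label only if both character-level and word-level uncertainty fall below thresholds. This is an illustrative toy sketch, not the paper's exact criterion; `mc_dropout_select`, the threshold values, and the synthetic logits are all assumptions for the example.

```python
import numpy as np

def mc_dropout_select(char_prob_samples, char_thresh=0.3, word_thresh=0.2):
    """Toy uncertainty-based pseudo-label selection (illustrative only).

    char_prob_samples: (K, T, V) array of per-character probability
    distributions from K stochastic (dropout) forward passes over a
    T-character hypothesis with vocabulary size V.
    """
    mean_probs = char_prob_samples.mean(axis=0)  # (T, V) ensemble average
    # Character-level predictive entropy of the averaged distribution.
    char_entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    word_entropy = char_entropy.mean()           # crude word-level score
    label = mean_probs.argmax(axis=1)            # pseudo-label per character
    accept = (char_entropy.max() < char_thresh) and (word_entropy < word_thresh)
    return label, accept

# Confident example: K=8 dropout samples nearly agree on each character.
rng = np.random.default_rng(0)
K, T, V = 8, 5, 26
logits = rng.normal(0.0, 0.05, size=(K, T, V))
logits[:, np.arange(T), [0, 7, 4, 11, 11]] += 8.0  # one index dominates per step
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
label, accept = mc_dropout_select(probs)
print(label, accept)  # [ 0  7  4 11 11] True -- low entropy, label accepted
```

Noisy, disagreeing samples would raise the predictive entropy and push `accept` to `False`, which is the mechanism that filters out erroneous high-confidence pseudo-labels from a single poorly calibrated pass.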